A Benchmarking Study of Vision-based Robotic Grasping Algorithms
Rameshbabu, Bharath K, Balakrishna, Sumukh S, Flynn, Brian, Kapoor, Vinayak, Norton, Adam, Yanco, Holly, Calli, Berk
We present a benchmarking study of vision-based robotic grasping algorithms with distinct approaches and provide a comparative analysis. In particular, we compare two machine-learning-based and two analytical algorithms using an existing benchmarking protocol from the literature and determine the algorithms' strengths and weaknesses under different experimental conditions. These conditions include variations in lighting, background textures, cameras with different noise levels, and grippers. We also run analogous experiments in simulation and with real robots and present the discrepancies. Some experiments are also run in two different laboratories using the same protocols to further analyze the repeatability of our results. We believe that this study, comprising 5040 experiments, provides important insights into the role and challenges of systematic experimentation in robotic manipulation, and guides the development of new algorithms by considering the factors that could impact their performance. The experiment recordings and our benchmarking software are publicly available.
UA-1 PH2 DECISIVE Testing Handbook: Test Methods and Benchmarking Performance Results for sUAS in Dense Urban Environments
Norton, Adam, Donoghue, Brendan, Gavriel, Peter
This report outlines all test methods and reviews all results derived from performance benchmarking of small unmanned aerial systems (sUAS) in dense urban environments conducted during Phase 2 of the Development and Execution of Comprehensive and Integrated Systematic Intelligent Vehicle Evaluations (DECISIVE) project by the University of Massachusetts Lowell (HEROES Project UA-1). Using 9 of the developed test methods, over 100 tests were conducted to benchmark the performance of 8 sUAS platforms: Cleo Robotics Dronut X1P (P = prototype), FLIR Black Hornet 3 PRS, Flyability Elios 2 GOV, Lumenier Nighthawk V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics Vesper.
Advances in Multi-agent Reinforcement Learning: Persistent Autonomy and Robot Learning Lab Report 2024
Multi-Agent Reinforcement Learning (MARL) approaches have emerged as popular solutions to address the general challenges of cooperation in multi-agent environments, where the success of achieving shared or individual goals critically depends on the coordination and collaboration between agents. However, existing cooperative MARL methods face several challenges intrinsic to multi-agent systems, such as the curse of dimensionality, non-stationarity, and the need for a global exploration strategy. Moreover, the presence of agents with constraints (e.g., limited battery life, restricted mobility) or distinct roles further exacerbates these challenges. This document provides an overview of recent advances in MARL research conducted at the Persistent Autonomy and Robot Learning (PeARL) lab at the University of Massachusetts Lowell. We briefly discuss various research directions and present a selection of approaches proposed in our most recent publications. For each proposed approach, we also highlight potential future directions to further advance the field.
Relational Weight Optimization for Enhancing Team Performance in Multi-Agent Multi-Armed Bandits
Kotturu, Monish Reddy, Movahed, Saniya Vahedian, Robinette, Paul, Jerath, Kshitij, Redlich, Amanda, Azadeh, Reza
Multi-Armed Bandits (MABs) are a class of reinforcement learning problems where an agent is presented with a set of arms (i.e., actions), with each arm giving a reward drawn from a probability distribution unknown to the agent (Lattimore and Szepesvári, 2020). The goal of the agent is to maximize its total reward, which requires balancing exploration and exploitation. MABs offer a simple model to simulate decision-making under uncertainty. Practical applications of MAB algorithms include news recommendations (Yang and Toni, 2018), online ad placement (Aramayo et al., 2022), dynamic pricing (Babaioff et al., 2015), and adaptive experimental design (Rafferty et al., 2019). In contrast to single-agent cases, in certain applications such as search and rescue, a team of agents should cooperate with each other to accomplish goals by maximizing team performance. Such problems are solved using Multi-Agent Multi-Armed Bandit (MAMAB) algorithms (Xu et al., 2020). Most existing algorithms rely on the presence of a graph: using a graph to represent the team behavior ensures that the relationships between the agents are preserved. However, existing works either do not consider the weight of each relationship (graph edges) (Madhushani and Leonard, 2020; Agarwal et al., 2021) or expect the user to manually set those weights (Moradipari et al., 2022). In this paper, we propose a new approach that combines graph optimization and MAMAB algorithms to enhance team performance by expediting the convergence to consensus of arm means. Our proposed approach: (i) improves team performance by optimizing the edge weights in the graph representing the team structure in large constrained teams; (ii) does not require manual tuning of the graph weights; (iii) is independent of the MAMAB algorithm and only depends on the consensus formula; and (iv) formulates the problem as a convex optimization, which is computationally efficient for large teams.
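The single-agent bandit setting described above can be illustrated with a minimal epsilon-greedy learner. This is a generic sketch of the exploration/exploitation trade-off, not the MAMAB algorithms of the paper; the Bernoulli reward model, function name, and hyperparameters are illustrative choices.

```python
import random

def epsilon_greedy_bandit(arm_means, epsilon=0.1, steps=10000, seed=0):
    """Simulate a single agent on a Bernoulli multi-armed bandit.

    With probability epsilon the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest empirical mean reward.
    """
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n          # number of pulls per arm
    estimates = [0.0] * n     # empirical mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                           # explore
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the empirical mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward
```

After enough steps, the estimates concentrate around the true arm means and most pulls go to the best arm, which is exactly the convergence that MAMAB consensus methods try to accelerate across a team.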
Generalizable Prediction Model of Molten Salt Mixture Density with Chemistry-Informed Transfer Learning
Barra, Julian, Shahbazi, Shayan, Birri, Anthony, Chahal, Rajni, Isah, Ibrahim, Anwar, Muhammad Nouman, Starkus, Tyler, Balaprakash, Prasanna, Lam, Stephen
Optimally designing molten salt applications requires knowledge of the salts' thermophysical properties, but existing databases are incomplete and experiments are challenging. Ideal mixing and Redlich-Kister models are computationally cheap but lack either accuracy or generality. To address this, a transfer learning approach using deep neural networks (DNNs) is proposed, combining Redlich-Kister models, experimental data, and ab initio properties. The approach predicts molten salt density with high accuracy ($r^{2}$ > 0.99, MAPE < 1%), outperforming the alternatives.
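As a point of reference for the Redlich-Kister baseline mentioned above, here is a minimal sketch of a binary-mixture molar volume: ideal mixing plus a Redlich-Kister polynomial for the excess term. The function names, units, and coefficient values are hypothetical illustrations, not the paper's fitted models.

```python
def redlich_kister_molar_volume(x1, v1, v2, coeffs):
    """Molar volume of a binary mixture: ideal mixing plus a
    Redlich-Kister expansion for the excess molar volume.

    x1      : mole fraction of component 1 (x2 = 1 - x1)
    v1, v2  : pure-component molar volumes (e.g., cm^3/mol)
    coeffs  : Redlich-Kister coefficients A_k (same units as v1, v2)
    """
    x2 = 1.0 - x1
    v_ideal = x1 * v1 + x2 * v2
    # excess term: x1*x2 * sum_k A_k * (x1 - x2)^k, zero at x1 = 0 or 1
    v_excess = x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))
    return v_ideal + v_excess

def density_from_molar_volume(x1, m1, m2, v_mix):
    """Mixture density (g/cm^3) from mean molar mass and molar volume."""
    m_mix = x1 * m1 + (1.0 - x1) * m2
    return m_mix / v_mix
```

By construction the excess term vanishes at the pure-component limits, which is why a Redlich-Kister fit interpolates mixtures well but cannot generalize beyond the chemistries it was fitted to.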
LLM-Oracle Machines
Contemporary AI applications leverage large language models (LLMs) to harness their knowledge and reasoning abilities for natural language processing tasks. This approach shares similarities with the concept of oracle Turing machines (OTMs). To capture the broader potential of these computations, including those not yet realized, we propose an extension to OTMs: the LLM-oracle machine (LLM-OM), by employing a cluster of LLMs as the oracle. Each LLM acts as a black box, capable of answering queries within its expertise, albeit with a delay. We introduce four variants of the LLM-OM: basic, augmented, fault-avoidance, and $\epsilon$-fault. The first two are commonly observed in existing AI applications. The latter two are specifically designed to address the challenges of LLM hallucinations, biases, and inconsistencies, aiming to ensure reliable outcomes.
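A fault-avoidance variant along these lines might be sketched as querying several oracles and accepting an answer only when a quorum agrees. The `fault_avoidance_query` function and the callable oracle interface below are illustrative assumptions, not the paper's formal LLM-OM construction.

```python
from collections import Counter

def fault_avoidance_query(oracles, query, quorum=None):
    """Query a cluster of LLM 'oracles' and accept an answer only if a
    quorum of them agree, as a guard against individual hallucinations.

    oracles : list of callables (query -> answer); hypothetical stand-ins
              for black-box LLM endpoints.
    quorum  : minimum number of agreeing oracles (default: strict majority).
    """
    if quorum is None:
        quorum = len(oracles) // 2 + 1
    answers = [oracle(query) for oracle in oracles]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes >= quorum:
        return answer
    return None  # no consensus: defer rather than risk a faulty answer
```

Returning `None` on disagreement models the design goal stated above: an unreliable answer is withheld instead of being passed downstream.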
Methods for Combining and Representing Non-Contextual Autonomy Scores for Unmanned Aerial Systems
Hertel, Brendan, Donald, Ryan, Dumas, Christian, Ahmadzadeh, S. Reza
Measuring an overall autonomy score for a robotic system requires combining a set of relevant aspects and features of the system that may be measured in different units, may be qualitative, and/or may be discordant. In this paper, we build upon an existing non-contextual autonomy framework that measures and combines the Autonomy Level and the Component Performance of a system into an overall autonomy score. We examine several methods of combining features, showing how some methods produce different rankings of the same data, and we employ the weighted product method to resolve this issue. Furthermore, we introduce the non-contextual autonomy coordinate and represent the overall autonomy of a system with an autonomy distance. We apply our method to a set of seven Unmanned Aerial Systems (UAS) and obtain their absolute autonomy scores as well as their relative scores with respect to the best system.
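The weighted product method mentioned above can be sketched as follows. The function name and weighting convention are illustrative, but the key property is real: rescaling one feature's units multiplies every system's score by the same constant, so the ranking is unaffected, which is what resolves disagreements between combination methods.

```python
def weighted_product_score(features, weights):
    """Combine per-feature scores into one score with the weighted
    product method: score = prod_i(f_i ** w_i).

    Unlike a weighted sum, the ranking produced by a weighted product is
    invariant to the scale/units of each feature, because rescaling a
    feature multiplies all systems' scores by the same factor.
    """
    assert len(features) == len(weights)
    assert all(f > 0 for f in features), "feature scores must be positive"
    score = 1.0
    for f, w in zip(features, weights):
        score *= f ** w
    return score
```

For example, two systems keep the same relative order whether a feature is expressed in meters or in centimeters.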
JMLR: Joint Medical LLM and Retrieval Training for Enhancing Reasoning and Professional Question Answering Capability
Wang, Junda, Yang, Zhichao, Yao, Zonghai, Yu, Hong
Large Language Models (LLMs) have demonstrated remarkable potential in medical knowledge acquisition and question-answering. However, LLMs can hallucinate and yield factually incorrect outcomes, even with domain-specific pretraining. Previously, retrieval-augmented generation (RAG) has had limited success in addressing hallucinations. Unlike previous RAG methods, where the retrieval model was trained separately from the LLM, we introduce JMLR (which Jointly trains the LLM and the information Retrieval model) during the fine-tuning phase. The synchronized training mechanism enhances JMLR's ability to retrieve clinical guidelines and leverage medical knowledge to reason and answer questions, and reduces the demand for computational resources. We evaluated JMLR on the important medical question-answering application. Our experimental results demonstrate that JMLR-13B (70.5%) outperforms the previous state-of-the-art open-source model Meditron-70B (68.9%), which uses conventional pre-training and fine-tuning, as well as Llama2-13B with RAG (67.7%), on a medical question-answering dataset. Comprehensive evaluations reveal that JMLR-13B enhances reasoning quality and reduces hallucinations better than Claude3-Opus. Additionally, JMLR-13B (148 GPU hours) also trains much faster than Meditron-70B (42630 GPU hours). Through this work, we provide a new and efficient knowledge-enhancement method for healthcare, demonstrating the potential of integrating retrieval and LLM training for medical question-answering systems.
Similarity-Aware Skill Reproduction based on Multi-Representational Learning from Demonstration
Hertel, Brendan, Ahmadzadeh, S. Reza
Learning from Demonstration (LfD) algorithms enable humans to teach new skills to robots through demonstrations. The learned skills can be robustly reproduced from identical or nearby boundary conditions (e.g., the initial point). However, when generalizing a learned skill over boundary conditions with higher variance, the similarity of the reproductions changes from one boundary condition to another, and a single LfD representation cannot preserve a consistent similarity across a generalization region. We propose a novel similarity-aware framework combining multiple LfD representations with a similarity metric that can improve skill generalization by finding reproductions with the highest similarity values for a given boundary condition. Given a demonstration of the skill, our framework constructs a similarity region around a point of interest (e.g., the initial point) by evaluating individual LfD representations using the similarity metric. Any point within this region corresponds to a representation that reproduces the skill with the greatest similarity. We validate our multi-representational framework in three simulated and four sets of real-world experiments using a physical 6-DOF robot. We also evaluate 11 different similarity metrics and categorize them according to their biases in 286 simulated experiments.
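The core selection step, picking the representation whose reproduction scores highest under the similarity metric, might be sketched as below. The callables standing in for LfD representations and the metric are hypothetical placeholders, not the paper's specific learners or 11 evaluated metrics.

```python
def most_similar_reproduction(demonstration, representations, similarity):
    """Pick, among several LfD representations, the reproduction that is
    most similar to the demonstration for a given boundary condition.

    representations : list of callables mapping a demonstration to a
                      reproduced trajectory (stand-ins for different
                      LfD learners).
    similarity      : callable (demo, reproduction) -> score, where a
                      higher score means a more similar reproduction.
    """
    reproductions = [rep(demonstration) for rep in representations]
    return max(reproductions, key=lambda r: similarity(demonstration, r))
```

With, say, a negative sum-of-squared-errors metric, the framework would return whichever representation drifts least from the demonstration at that boundary condition.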
Learning from Successful and Failed Demonstrations via Optimization
Hertel, Brendan, Ahmadzadeh, S. Reza
Learning from Demonstration (LfD) is a popular approach that allows humans to teach robots new skills by showing the correct way(s) of performing the desired skill. Human-provided demonstrations, however, are not always optimal, and the teacher usually addresses this issue by discarding or replacing sub-optimal (noisy or faulty) demonstrations. We propose a novel LfD representation that learns from both successful and failed demonstrations of a skill. Our approach encodes the two subsets of captured demonstrations (labeled by the teacher) into a statistical skill model, constructs a set of quadratic costs, and finds an optimal reproduction of the skill under novel problem conditions (i.e., constraints). The optimal reproduction balances convergence towards successful examples and divergence from failed examples. We evaluate our approach through several 2D and 3D experiments in the real world using a UR5e manipulator arm, and also show that it can reproduce a skill from only failed demonstrations. The benefits of exploiting both failed and successful demonstrations are shown through comparison with two existing LfD approaches. We also compare our approach against an existing skill refinement method and show its capabilities in a multi-coordinate setting.
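A toy version of this quadratic-cost balance, attraction toward successful demonstrations and repulsion from failed ones, admits a closed-form minimizer. The objective, weights, and function name below are illustrative simplifications of the paper's optimization, applied to a single reproduced point rather than a full trajectory.

```python
def optimal_reproduction(successful, failed, w_s=1.0, w_f=0.3):
    """Closed-form minimizer of a toy quadratic objective that pulls a
    reproduced point x toward successful demonstrations and pushes it
    away from failed ones (all points are equal-length vectors):

        J(x) = sum_s w_s * ||x - s||^2  -  sum_f w_f * ||x - f||^2

    Setting the gradient to zero gives
        (n_s*w_s - n_f*w_f) * x = w_s*sum(s) - w_f*sum(f),
    which requires n_s*w_s > n_f*w_f so that J stays convex.
    """
    n_s, n_f = len(successful), len(failed)
    denom = n_s * w_s - n_f * w_f
    assert denom > 0, "attraction must dominate repulsion for convexity"
    dim = len(successful[0])
    return [
        (w_s * sum(s[i] for s in successful)
         - w_f * sum(f[i] for f in failed)) / denom
        for i in range(dim)
    ]
```

The result lies on the line through the successful mean, displaced away from the failed examples, which is the qualitative behavior the abstract describes.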